This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
To give an example, a user could receive a "spend 10 dollars, get 2 dollars off" discount offer on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
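The attribution trap described above can be made concrete. Below is a minimal sketch, on a toy event log with hypothetical customer and offer ids (times in hours), of one way to flag completions that were never preceded by a view:

```python
import pandas as pd

# Toy event log mirroring the transcript structure described above.
events = pd.DataFrame({
    'customer_id': ['a', 'a', 'b', 'b', 'b'],
    'offer_id':    ['o1', 'o1', 'o1', 'o1', 'o1'],
    'event': ['offer received', 'offer completed',
              'offer received', 'offer viewed', 'offer completed'],
    'time': [0, 120, 0, 24, 120],
})

def influenced(group):
    """True only if the offer was viewed at or before the completion time."""
    completed = group.loc[group['event'] == 'offer completed', 'time']
    viewed = group.loc[group['event'] == 'offer viewed', 'time']
    if completed.empty or viewed.empty:
        return False
    return bool((viewed <= completed.min()).any())

flags = {key: influenced(grp)
         for key, grp in events.groupby(['customer_id', 'offer_id'])}
print(flags)
# customer 'a' completed without ever viewing -> not influenced
```

Customer 'a' would generate an "offer completed" record in the raw data even though the offer never influenced them; separating these two cases is the core of the cleaning work.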
This makes data cleaning especially important and tricky.
You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers.
Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A).
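The heuristic approach sketched in that last sentence boils down to a response-rate table. A minimal illustration on toy data (the gender/offer/response values are invented for the example):

```python
import pandas as pd

# Toy responses table: one row per (customer, offer) with a binary outcome.
responses = pd.DataFrame({
    'gender': ['F', 'F', 'F', 'F', 'M', 'M'],
    'offer':  ['A', 'A', 'B', 'B', 'A', 'B'],
    'responded': [1, 1, 1, 0, 0, 1],
})

# Response rate per demographic group and offer.
rates = responses.groupby(['gender', 'offer'])['responded'].mean()
# Best offer for each group = the (gender, offer) pair with the highest rate.
best = rates.groupby('gender').idxmax()
print(rates)
print(best)
```

With the real data, the grouping keys would be the demographic columns from profile.json and the offer ids from portfolio.json.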
The data is contained in three files:
Here is the schema and explanation of each variable in the files:
portfolio.json - offer ids and metadata about each offer (duration, type, etc.)
profile.json - demographic data for each customer
transcript.json - records for transactions, offers received, offers viewed, and offers completed
import pandas as pd
import numpy as np
import math
import json
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import warnings
from datetime import datetime
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
# read in the json files
portfolio = pd.read_json('portfolio.json', orient='records', lines=True)
profile = pd.read_json('profile.json', orient='records', lines=True)
transcript = pd.read_json('transcript.json', orient='records', lines=True)
portfolio
| | reward | channels | difficulty | duration | offer_type | id |
|---|---|---|---|---|---|---|
| 0 | 10 | [email, mobile, social] | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd |
| 1 | 10 | [web, email, mobile, social] | 10 | 5 | bogo | 4d5c57ea9a6940dd891ad53e9dbe8da0 |
| 2 | 0 | [web, email, mobile] | 0 | 4 | informational | 3f207df678b143eea3cee63160fa8bed |
| 3 | 5 | [web, email, mobile] | 5 | 7 | bogo | 9b98b8c7a33c4b65b9aebfe6a799e6d9 |
| 4 | 5 | [web, email] | 20 | 10 | discount | 0b1e1539f2cc45b7b9fa7c272da2e1d7 |
| 5 | 3 | [web, email, mobile, social] | 7 | 7 | discount | 2298d6c36e964ae4a3e7e9706d1fb8c2 |
| 6 | 2 | [web, email, mobile, social] | 10 | 10 | discount | fafdcd668e3743c1bb461111dcafc2a4 |
| 7 | 0 | [email, mobile, social] | 0 | 3 | informational | 5a8bc65990b245e5a138643cd4eb9837 |
| 8 | 5 | [web, email, mobile, social] | 5 | 5 | bogo | f19421c1d4aa40978ebb69ca19b0e20d |
| 9 | 2 | [web, email, mobile] | 10 | 7 | discount | 2906b810c7d4411798c6938adc9daaa5 |
portfolio.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 6 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   reward      10 non-null     int64
 1   channels    10 non-null     object
 2   difficulty  10 non-null     int64
 3   duration    10 non-null     int64
 4   offer_type  10 non-null     object
 5   id          10 non-null     object
dtypes: int64(3), object(3)
memory usage: 608.0+ bytes
portfolio['offer_type'].value_counts()
bogo 4 discount 4 informational 2 Name: offer_type, dtype: int64
sns.countplot(x=portfolio['offer_type']);
BOGO and discount are the most common offer types; informational offers are the least common.
portfolio = portfolio.join(portfolio.channels.str.join('|').str.get_dummies())
portfolio.drop('channels',axis=1,inplace=True)
portfolio.rename(columns={'id':'offer_id'},inplace=True)
portfolio
| | reward | difficulty | duration | offer_type | offer_id | email | mobile | social | web |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 |
| 1 | 10 | 10 | 5 | bogo | 4d5c57ea9a6940dd891ad53e9dbe8da0 | 1 | 1 | 1 | 1 |
| 2 | 0 | 0 | 4 | informational | 3f207df678b143eea3cee63160fa8bed | 1 | 1 | 0 | 1 |
| 3 | 5 | 5 | 7 | bogo | 9b98b8c7a33c4b65b9aebfe6a799e6d9 | 1 | 1 | 0 | 1 |
| 4 | 5 | 20 | 10 | discount | 0b1e1539f2cc45b7b9fa7c272da2e1d7 | 1 | 0 | 0 | 1 |
| 5 | 3 | 7 | 7 | discount | 2298d6c36e964ae4a3e7e9706d1fb8c2 | 1 | 1 | 1 | 1 |
| 6 | 2 | 10 | 10 | discount | fafdcd668e3743c1bb461111dcafc2a4 | 1 | 1 | 1 | 1 |
| 7 | 0 | 0 | 3 | informational | 5a8bc65990b245e5a138643cd4eb9837 | 1 | 1 | 1 | 0 |
| 8 | 5 | 5 | 5 | bogo | f19421c1d4aa40978ebb69ca19b0e20d | 1 | 1 | 1 | 1 |
| 9 | 2 | 10 | 7 | discount | 2906b810c7d4411798c6938adc9daaa5 | 1 | 1 | 0 | 1 |
profile.head()
| | gender | age | id | became_member_on | income |
|---|---|---|---|---|---|
| 0 | None | 118 | 68be06ca386d4c31939f3a4f0e3dd783 | 20170212 | NaN |
| 1 | F | 55 | 0610b486422d4921ae7d2bf64640c50b | 20170715 | 112000.0 |
| 2 | None | 118 | 38fe809add3b4fcf9315a9694bb96ff5 | 20180712 | NaN |
| 3 | F | 75 | 78afa995795e4d85b5d9ceeca43f5fef | 20170509 | 100000.0 |
| 4 | None | 118 | a03223e636434f42ac4c3df47e8bac43 | 20170804 | NaN |
profile.shape
(17000, 5)
profile.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 17000 entries, 0 to 16999
Data columns (total 5 columns):
 #   Column            Non-Null Count  Dtype
---  ------            --------------  -----
 0   gender            14825 non-null  object
 1   age               17000 non-null  int64
 2   id                17000 non-null  object
 3   became_member_on  17000 non-null  int64
 4   income            14825 non-null  float64
dtypes: float64(1), int64(2), object(2)
memory usage: 664.2+ KB
profile.isna().sum()
gender 2175 age 0 id 0 became_member_on 0 income 2175 dtype: int64
profile.describe()
| | age | became_member_on | income |
|---|---|---|---|
| count | 17000.000000 | 1.700000e+04 | 14825.000000 |
| mean | 62.531412 | 2.016703e+07 | 65404.991568 |
| std | 26.738580 | 1.167750e+04 | 21598.299410 |
| min | 18.000000 | 2.013073e+07 | 30000.000000 |
| 25% | 45.000000 | 2.016053e+07 | 49000.000000 |
| 50% | 58.000000 | 2.017080e+07 | 64000.000000 |
| 75% | 73.000000 | 2.017123e+07 | 80000.000000 |
| max | 118.000000 | 2.018073e+07 | 120000.000000 |
profile[profile['age']== 118].age.count()
2175
profile[profile['income'].isna()]
| | gender | age | id | became_member_on | income |
|---|---|---|---|---|---|
| 0 | None | 118 | 68be06ca386d4c31939f3a4f0e3dd783 | 20170212 | NaN |
| 2 | None | 118 | 38fe809add3b4fcf9315a9694bb96ff5 | 20180712 | NaN |
| 4 | None | 118 | a03223e636434f42ac4c3df47e8bac43 | 20170804 | NaN |
| 6 | None | 118 | 8ec6ce2a7e7949b1bf142def7d0e0586 | 20170925 | NaN |
| 7 | None | 118 | 68617ca6246f4fbc85e91a2a49552598 | 20171002 | NaN |
| ... | ... | ... | ... | ... | ... |
| 16980 | None | 118 | 5c686d09ca4d475a8f750f2ba07e0440 | 20160901 | NaN |
| 16982 | None | 118 | d9ca82f550ac4ee58b6299cf1e5c824a | 20160415 | NaN |
| 16989 | None | 118 | ca45ee1883624304bac1e4c8a114f045 | 20180305 | NaN |
| 16991 | None | 118 | a9a20fa8b5504360beb4e7c8712f8306 | 20160116 | NaN |
| 16994 | None | 118 | c02b10e8752c4d8e9b73f918558531f7 | 20151211 | NaN |
2175 rows × 5 columns
profile[profile['income'].isna()].age.describe()
count 2175.0 mean 118.0 std 0.0 min 118.0 25% 118.0 50% 118.0 75% 118.0 max 118.0 Name: age, dtype: float64
profile = profile[~profile['income'].isna()]
profile
| | gender | age | id | became_member_on | income |
|---|---|---|---|---|---|
| 1 | F | 55 | 0610b486422d4921ae7d2bf64640c50b | 20170715 | 112000.0 |
| 3 | F | 75 | 78afa995795e4d85b5d9ceeca43f5fef | 20170509 | 100000.0 |
| 5 | M | 68 | e2127556f4f64592b11af22de27a7932 | 20180426 | 70000.0 |
| 8 | M | 65 | 389bc3fa690240e798340f5a15918d5c | 20180209 | 53000.0 |
| 12 | M | 58 | 2eeac8d8feae4a8cad5a6af0499a211d | 20171111 | 51000.0 |
| ... | ... | ... | ... | ... | ... |
| 16995 | F | 45 | 6d5f3a774f3d4714ab0c092238f3a1d7 | 20180604 | 54000.0 |
| 16996 | M | 61 | 2cb4f97358b841b9a9773a7aa05a9d77 | 20180713 | 72000.0 |
| 16997 | M | 49 | 01d26f638c274aa0b965d24cefe3183f | 20170126 | 73000.0 |
| 16998 | F | 83 | 9dc1421481194dcd9400aec7c9ae6366 | 20160307 | 50000.0 |
| 16999 | F | 62 | e4052622e5ba45a8b96b59aba68cf068 | 20170722 | 82000.0 |
14825 rows × 5 columns
plt.figure(figsize=(15,5))
plt.hist(profile.age,bins=15,color='black');
The average age is 54-55 years.
The majority of members are between 35 and 70.
Around 50% of members are between 42 and 66 (the interquartile range).
profile.describe()
| | age | became_member_on | income |
|---|---|---|---|
| count | 14825.000000 | 1.482500e+04 | 14825.000000 |
| mean | 54.393524 | 2.016689e+07 | 65404.991568 |
| std | 17.383705 | 1.188565e+04 | 21598.299410 |
| min | 18.000000 | 2.013073e+07 | 30000.000000 |
| 25% | 42.000000 | 2.016052e+07 | 49000.000000 |
| 50% | 55.000000 | 2.017080e+07 | 64000.000000 |
| 75% | 66.000000 | 2.017123e+07 | 80000.000000 |
| max | 101.000000 | 2.018073e+07 | 120000.000000 |
profile['became_member_on'] = pd.to_datetime(profile.became_member_on,format='%Y%m%d')
profile.head()
| | gender | age | id | became_member_on | income |
|---|---|---|---|---|---|
| 1 | F | 55 | 0610b486422d4921ae7d2bf64640c50b | 2017-07-15 | 112000.0 |
| 3 | F | 75 | 78afa995795e4d85b5d9ceeca43f5fef | 2017-05-09 | 100000.0 |
| 5 | M | 68 | e2127556f4f64592b11af22de27a7932 | 2018-04-26 | 70000.0 |
| 8 | M | 65 | 389bc3fa690240e798340f5a15918d5c | 2018-02-09 | 53000.0 |
| 12 | M | 58 | 2eeac8d8feae4a8cad5a6af0499a211d | 2017-11-11 | 51000.0 |
plt.figure(figsize=(15,5))
plt.hist(profile.income,bins=10,color='orange');
profile.income.agg(['mean','max','min'])
mean 65404.991568 max 120000.000000 min 30000.000000 Name: income, dtype: float64
profile.gender.value_counts()
M 8484 F 6129 O 212 Name: gender, dtype: int64
profile.gender.value_counts().plot(kind='pie');
Classify age groups as
Under 20
21-45
46-60
61-80
profile = profile[profile.age < 81].copy()
profile['age_group'] = np.where(profile.age < 20, 'Under 20',
                       np.where(profile.age < 46, '21-45',
                       np.where(profile.age < 61, '46-60', '61-80')))
profile.drop(['age'], axis=1, inplace=True)
profile.rename(columns={'id': 'customer_id'}, inplace=True)
profile.head()
| | gender | customer_id | became_member_on | income | age_group |
|---|---|---|---|---|---|
| 1 | F | 0610b486422d4921ae7d2bf64640c50b | 2017-07-15 | 112000.0 | 46-60 |
| 3 | F | 78afa995795e4d85b5d9ceeca43f5fef | 2017-05-09 | 100000.0 | 61-80 |
| 5 | M | e2127556f4f64592b11af22de27a7932 | 2018-04-26 | 70000.0 | 61-80 |
| 8 | M | 389bc3fa690240e798340f5a15918d5c | 2018-02-09 | 53000.0 | 61-80 |
| 12 | M | 2eeac8d8feae4a8cad5a6af0499a211d | 2017-11-11 | 51000.0 | 46-60 |
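As an aside, the nested np.where used above for age binning can also be expressed with pd.cut. A small sketch on toy ages (right-closed bins reproduce the same 'Under 20' / '21-45' / '46-60' / '61-80' labels as the nested conditions):

```python
import pandas as pd

# Toy ages chosen to sit on the bin boundaries.
ages = pd.Series([19, 20, 45, 46, 60, 61, 80])
age_group = pd.cut(ages,
                   bins=[0, 19, 45, 60, 80],   # right-closed: (0,19], (19,45], ...
                   labels=['Under 20', '21-45', '46-60', '61-80'])
print(age_group.tolist())
```

Using pd.cut keeps the bin edges and labels in one place, which is easier to audit than three nested np.where calls.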
Around 57% of members are male and 41% are female.
transcript.head()
| | person | event | value | time |
|---|---|---|---|---|
| 0 | 78afa995795e4d85b5d9ceeca43f5fef | offer received | {'offer id': '9b98b8c7a33c4b65b9aebfe6a799e6d9'} | 0 |
| 1 | a03223e636434f42ac4c3df47e8bac43 | offer received | {'offer id': '0b1e1539f2cc45b7b9fa7c272da2e1d7'} | 0 |
| 2 | e2127556f4f64592b11af22de27a7932 | offer received | {'offer id': '2906b810c7d4411798c6938adc9daaa5'} | 0 |
| 3 | 8ec6ce2a7e7949b1bf142def7d0e0586 | offer received | {'offer id': 'fafdcd668e3743c1bb461111dcafc2a4'} | 0 |
| 4 | 68617ca6246f4fbc85e91a2a49552598 | offer received | {'offer id': '4d5c57ea9a6940dd891ad53e9dbe8da0'} | 0 |
transcript.shape
(306534, 4)
transcript.describe()
| | time |
|---|---|
| count | 306534.000000 |
| mean | 366.382940 |
| std | 200.326314 |
| min | 0.000000 |
| 25% | 186.000000 |
| 50% | 408.000000 |
| 75% | 528.000000 |
| max | 714.000000 |
transcript.isna().sum()
person 0 event 0 value 0 time 0 dtype: int64
px.pie(values=transcript.event.value_counts(),names=transcript.event.value_counts().index,hole=0.5)
(Recall that members above age 80 were treated as outliers and removed from the profile data earlier.)
transcript.head()
| | person | event | value | time |
|---|---|---|---|---|
| 0 | 78afa995795e4d85b5d9ceeca43f5fef | offer received | {'offer id': '9b98b8c7a33c4b65b9aebfe6a799e6d9'} | 0 |
| 1 | a03223e636434f42ac4c3df47e8bac43 | offer received | {'offer id': '0b1e1539f2cc45b7b9fa7c272da2e1d7'} | 0 |
| 2 | e2127556f4f64592b11af22de27a7932 | offer received | {'offer id': '2906b810c7d4411798c6938adc9daaa5'} | 0 |
| 3 | 8ec6ce2a7e7949b1bf142def7d0e0586 | offer received | {'offer id': 'fafdcd668e3743c1bb461111dcafc2a4'} | 0 |
| 4 | 68617ca6246f4fbc85e91a2a49552598 | offer received | {'offer id': '4d5c57ea9a6940dd891ad53e9dbe8da0'} | 0 |
Expand the keys of the 'value' column into new columns.
def clean_transcript(df):
    """
    Clean the transcript data frame by expanding the keys of the
    'value' column into new columns.

    New columns
    -----------
    money_gained : reward money from "offer completed" events
    money_spent  : money spent in "transaction" events
    offer_id     : offer identifier
    """
    # expand the dictionary into columns
    df['offer_id'] = df['value'].apply(lambda x: x.get('offer_id'))
    df['offer id'] = df['value'].apply(lambda x: x.get('offer id'))
    df['money_gained'] = df['value'].apply(lambda x: x.get('reward'))
    df['money_spent'] = df['value'].apply(lambda x: x.get('amount'))
    # the key appears as both 'offer id' and 'offer_id'; coalesce into 'offer_id'
    df['offer_id'] = df['offer_id'].fillna(df['offer id'])
    # drop the helper 'offer id' and the original 'value' column
    df.drop(['offer id', 'value'], axis=1, inplace=True)
    # replace NaN with 0
    df.fillna(0, inplace=True)
    return df
cleaned_transcript = clean_transcript(transcript)
cleaned_transcript.rename(columns={'person':'customer_id'},inplace=True)
cleaned_transcript
| | customer_id | event | time | offer_id | money_gained | money_spent |
|---|---|---|---|---|---|---|
| 0 | 78afa995795e4d85b5d9ceeca43f5fef | offer received | 0 | 9b98b8c7a33c4b65b9aebfe6a799e6d9 | 0.0 | 0.00 |
| 1 | a03223e636434f42ac4c3df47e8bac43 | offer received | 0 | 0b1e1539f2cc45b7b9fa7c272da2e1d7 | 0.0 | 0.00 |
| 2 | e2127556f4f64592b11af22de27a7932 | offer received | 0 | 2906b810c7d4411798c6938adc9daaa5 | 0.0 | 0.00 |
| 3 | 8ec6ce2a7e7949b1bf142def7d0e0586 | offer received | 0 | fafdcd668e3743c1bb461111dcafc2a4 | 0.0 | 0.00 |
| 4 | 68617ca6246f4fbc85e91a2a49552598 | offer received | 0 | 4d5c57ea9a6940dd891ad53e9dbe8da0 | 0.0 | 0.00 |
| ... | ... | ... | ... | ... | ... | ... |
| 306529 | b3a1272bc9904337b331bf348c3e8c17 | transaction | 714 | 0 | 0.0 | 1.59 |
| 306530 | 68213b08d99a4ae1b0dcb72aebd9aa35 | transaction | 714 | 0 | 0.0 | 9.53 |
| 306531 | a00058cf10334a308c68e7631c529907 | transaction | 714 | 0 | 0.0 | 3.61 |
| 306532 | 76ddbd6576844afe811f1a3c0fbb5bec | transaction | 714 | 0 | 0.0 | 3.53 |
| 306533 | c02b10e8752c4d8e9b73f918558531f7 | transaction | 714 | 0 | 0.0 | 4.05 |
306534 rows × 6 columns
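An alternative to the repeated .apply calls above is pd.json_normalize, which expands every dict key into its own column in one pass. A sketch on a toy 'value' column (the id string is hypothetical; the keys mirror the ones in the transcript):

```python
import pandas as pd

# Toy 'value' column with the same mixed keys as the real transcript.
toy = pd.DataFrame({'value': [{'offer id': 'abc'},
                              {'amount': 9.53},
                              {'offer_id': 'abc', 'reward': 2}]})
expanded = pd.json_normalize(toy['value'].tolist())
# 'offer id' and 'offer_id' arrive as separate columns; coalesce them.
expanded['offer_id'] = expanded['offer_id'].fillna(expanded['offer id'])
expanded = expanded.drop(columns='offer id')
print(expanded.columns.tolist())
```

The same coalescing step is still needed because the source data spells the key two ways.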
Merge the DataFrames for EDA. Note that merging on offer_id keeps only offer-related events; plain transaction records (whose offer_id was filled with 0) are dropped from merged_df.
merge_1 = pd.merge(portfolio,cleaned_transcript, on='offer_id')
merged_df = pd.merge(merge_1,profile,on='customer_id')
merged_df.head()
| | reward | difficulty | duration | offer_type | offer_id | email | mobile | social | web | customer_id | event | time | money_gained | money_spent | gender | became_member_on | income | age_group |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer received | 0 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 1 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer viewed | 102 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 2 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer received | 504 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 3 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer viewed | 510 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 4 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer completed | 510 | 10.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
merged_df.isna().sum()
reward 0 difficulty 0 duration 0 offer_type 0 offer_id 0 email 0 mobile 0 social 0 web 0 customer_id 0 event 0 time 0 money_gained 0 money_spent 0 gender 0 became_member_on 0 income 0 age_group 0 dtype: int64
merged_df.describe()
| | reward | difficulty | duration | email | mobile | social | web | time | money_gained | money_spent | income |
|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 138727.000000 | 138727.000000 | 138727.000000 | 138727.0 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.0 | 138727.000000 |
| mean | 4.442812 | 7.896884 | 6.628659 | 1.0 | 0.916880 | 0.657810 | 0.806937 | 354.380560 | 1.072437 | 0.0 | 65994.802742 |
| std | 3.370704 | 5.043375 | 2.133634 | 0.0 | 0.276065 | 0.474445 | 0.394703 | 198.323238 | 2.444492 | 0.0 | 21382.444305 |
| min | 0.000000 | 0.000000 | 3.000000 | 1.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.0 | 30000.000000 |
| 25% | 2.000000 | 5.000000 | 5.000000 | 1.0 | 1.000000 | 0.000000 | 1.000000 | 168.000000 | 0.000000 | 0.0 | 50000.000000 |
| 50% | 5.000000 | 10.000000 | 7.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 408.000000 | 0.000000 | 0.0 | 64000.000000 |
| 75% | 5.000000 | 10.000000 | 7.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 510.000000 | 0.000000 | 0.0 | 80000.000000 |
| max | 10.000000 | 20.000000 | 10.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 714.000000 | 10.000000 | 0.0 | 120000.000000 |
merged_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 138727 entries, 0 to 138726
Data columns (total 18 columns):
 #   Column            Non-Null Count   Dtype
---  ------            --------------   -----
 0   reward            138727 non-null  int64
 1   difficulty        138727 non-null  int64
 2   duration          138727 non-null  int64
 3   offer_type        138727 non-null  object
 4   offer_id          138727 non-null  object
 5   email             138727 non-null  int64
 6   mobile            138727 non-null  int64
 7   social            138727 non-null  int64
 8   web               138727 non-null  int64
 9   customer_id       138727 non-null  object
 10  event             138727 non-null  object
 11  time              138727 non-null  int64
 12  money_gained      138727 non-null  float64
 13  money_spent       138727 non-null  float64
 14  gender            138727 non-null  object
 15  became_member_on  138727 non-null  datetime64[ns]
 16  income            138727 non-null  float64
 17  age_group         138727 non-null  object
dtypes: datetime64[ns](1), float64(3), int64(8), object(6)
memory usage: 20.1+ MB
plt.figure(figsize=(15,6))
merged_df['offer_type'].value_counts().plot.barh(title=' Distribution of offer types');
plt.figure(figsize=(15,6))
plt.hist(merged_df['income'], bins=50,color='b');
plt.figure(figsize=(15,6))
merged_df['event'].value_counts().plot.barh(title=' Distribution of offers');
plt.figure(figsize=(15, 6))
sns.countplot(x= merged_df['age_group'], hue= merged_df['gender']);
plt.figure(figsize=(15, 6))
sns.countplot(x= merged_df['offer_type'], hue= merged_df['gender'],edgecolor=sns.color_palette("dark", 4));
plt.figure(figsize=(15, 6))
sns.countplot(x= merged_df['event'], hue= merged_df['gender'],edgecolor=sns.color_palette("dark", 4));
plt.figure(figsize=(15,6))
sns.countplot(x= merged_df['event'], hue= merged_df['offer_type'],edgecolor=sns.color_palette("dark", 3));
plt.figure(figsize=(15, 6))
sns.countplot(x= merged_df['age_group'], hue= merged_df['event'],edgecolor=sns.color_palette("dark", 3));
merged_df.head()
| | reward | difficulty | duration | offer_type | offer_id | email | mobile | social | web | customer_id | event | time | money_gained | money_spent | gender | became_member_on | income | age_group |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer received | 0 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 1 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer viewed | 102 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 2 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer received | 504 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 3 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer viewed | 510 | 0.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
| 4 | 10 | 10 | 7 | bogo | ae264e3637204a6fb9bb56bc8210ddfd | 1 | 1 | 1 | 0 | 4b0da7e80e5945209a1fdddfe813dbe0 | offer completed | 510 | 10.0 | 0.0 | M | 2017-09-09 | 100000.0 | 61-80 |
merged_df.shape
(138727, 18)
Convert categorical variables such as gender, event, offer type, and age group to numerical values.
Encode the 'event' data to numerical values:
offer received ---> 1
offer viewed ---> 2
offer completed ---> 3
Encode offer_id and customer_id.
Split 'became_member_on' column and create separate columns for month and year.
Scale and normalize numerical data.
def cleaned_merged_df(df):
    """
    Clean merged data frame for ML modeling

    Parameters
    ----------
    df: input DataFrame

    Returns
    -------
    df: cleaned DataFrame
    """
    # process categorical variables
    categorical = ['offer_type', 'gender', 'age_group']
    df = pd.get_dummies(df, columns=categorical)
    # add month and year columns
    df['month_member'] = df['became_member_on'].apply(lambda x: x.month)
    df['year_member'] = df['became_member_on'].apply(lambda x: x.year)
    # drop became_member_on column
    df.drop('became_member_on', axis=1, inplace=True)
    # map each offer_id to an integer code
    offerids = df['offer_id'].unique().tolist()
    o_mapping = dict(zip(offerids, range(len(offerids))))
    df.replace({'offer_id': o_mapping}, inplace=True)
    # map each customer_id to an integer code
    cusids = df['customer_id'].unique().tolist()
    c_mapping = dict(zip(cusids, range(len(cusids))))
    df.replace({'customer_id': c_mapping}, inplace=True)
    # scale numerical features to [0, 1] with a MinMaxScaler
    scaler = MinMaxScaler()  # default=(0, 1)
    numerical = ['income', 'difficulty', 'duration', 'reward', 'time', 'money_gained', 'money_spent']
    df[numerical] = scaler.fit_transform(df[numerical])
    # encode 'event' data to numerical values according to task 2
    df['event'] = df['event'].map({'offer received': 1, 'offer viewed': 2, 'offer completed': 3})
    return df
cleaned_data = cleaned_merged_df(merged_df)
cleaned_data.head()
| | reward | difficulty | duration | offer_id | email | mobile | social | web | customer_id | event | ... | offer_type_informational | gender_F | gender_M | gender_O | age_group_21-45 | age_group_46-60 | age_group_61-80 | age_group_Under 20 | month_member | year_member |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.0 | 0.5 | 0.571429 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 | 2017 |
| 1 | 1.0 | 0.5 | 0.571429 | 0 | 1 | 1 | 1 | 0 | 0 | 2 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 | 2017 |
| 2 | 1.0 | 0.5 | 0.571429 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 | 2017 |
| 3 | 1.0 | 0.5 | 0.571429 | 0 | 1 | 1 | 1 | 0 | 0 | 2 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 | 2017 |
| 4 | 1.0 | 0.5 | 0.571429 | 0 | 1 | 1 | 1 | 0 | 0 | 3 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 | 2017 |
5 rows × 26 columns
cleaned_data.shape
(138727, 26)
cleaned_data.describe()
| | reward | difficulty | duration | offer_id | email | mobile | social | web | customer_id | event | ... | offer_type_informational | gender_F | gender_M | gender_O | age_group_21-45 | age_group_46-60 | age_group_61-80 | age_group_Under 20 | month_member | year_member |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.0 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | ... | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 | 138727.000000 |
| mean | 0.444281 | 0.394844 | 0.518380 | 4.574135 | 1.0 | 0.916880 | 0.657810 | 0.806937 | 6673.650248 | 1.769490 | ... | 0.151917 | 0.417684 | 0.567409 | 0.014907 | 0.290809 | 0.361941 | 0.333605 | 0.013646 | 15.891031 | 2016.576232 |
| std | 0.337070 | 0.252169 | 0.304805 | 2.788421 | 0.0 | 0.276065 | 0.474445 | 0.394703 | 3960.069028 | 0.781956 | ... | 0.358942 | 0.493179 | 0.495437 | 0.121181 | 0.454137 | 0.480564 | 0.471502 | 0.116015 | 8.718763 | 1.193368 |
| min | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | 2013.000000 |
| 25% | 0.200000 | 0.250000 | 0.285714 | 2.000000 | 1.0 | 1.000000 | 0.000000 | 1.000000 | 3241.000000 | 1.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 8.000000 | 2016.000000 |
| 50% | 0.500000 | 0.500000 | 0.571429 | 5.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 6547.000000 | 2.000000 | ... | 0.000000 | 0.000000 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 16.000000 | 2017.000000 |
| 75% | 0.500000 | 0.500000 | 0.571429 | 7.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 10092.000000 | 2.000000 | ... | 0.000000 | 1.000000 | 1.000000 | 0.000000 | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 23.000000 | 2017.000000 |
| max | 1.000000 | 1.000000 | 1.000000 | 9.000000 | 1.0 | 1.000000 | 1.000000 | 1.000000 | 13834.000000 | 3.000000 | ... | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 31.000000 | 2018.000000 |
8 rows × 26 columns
Let's split the data into train and test sets.
We will train machine learning models on the train set and evaluate their performance on the test set.
X = cleaned_data.drop('event',axis=1)
y = cleaned_data['event']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=2)
print("We have {} rows for training and {} rows for testing.".format(len(X_train), len(X_test)))
We have 97108 rows for training and 41619 rows for testing.
We will work with a few different models and choose one after comparing their performance.
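Since cross_val_score was imported earlier but is not used below, here is a sketch of how cross-validation could compare the candidate models. It runs on synthetic stand-in data (the real features live in X_train), so the numbers are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for X_train / y_train so this sketch is self-contained.
X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=2)

models = {'Decision Tree': DecisionTreeClassifier(random_state=2),
          'Random Forest': RandomForestClassifier(random_state=2)}
for name, model in models.items():
    # 5-fold cross-validated weighted F1, averaged across folds
    scores = cross_val_score(model, X_demo, y_demo, cv=5, scoring='f1_weighted')
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Cross-validated scores are less sensitive to a single lucky train/test split than the single hold-out evaluation used below.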
model_rf = RandomForestClassifier()
model_dt = DecisionTreeClassifier()
model_knn = KNeighborsClassifier(n_neighbors=5)
model_ada = AdaBoostClassifier()
model_rf.fit(X_train,y_train)
RandomForestClassifier()
y_pred = model_rf.predict(X_test)
print("Classification report for Random Forest Model",'\n')
print(metrics.classification_report(y_test,y_pred))
Classification report for Random Forest Model
precision recall f1-score support
1 0.64 0.70 0.67 18636
2 0.55 0.48 0.51 13946
3 1.00 1.00 1.00 9037
accuracy 0.69 41619
macro avg 0.73 0.73 0.73 41619
weighted avg 0.69 0.69 0.69 41619
model_dt.fit(X_train,y_train)
DecisionTreeClassifier()
y_pred = model_dt.predict(X_test)
print("Classification report for Decision Tree Model",'\n')
print(metrics.classification_report(y_test,y_pred))
Classification report for Decision Tree Model
precision recall f1-score support
1 0.84 0.83 0.84 18636
2 0.78 0.79 0.79 13946
3 1.00 1.00 1.00 9037
accuracy 0.86 41619
macro avg 0.87 0.87 0.87 41619
weighted avg 0.86 0.86 0.86 41619
model_knn.fit(X_train,y_train)
KNeighborsClassifier()
y_pred = model_knn.predict(X_test)
print("Classification report for K-Nearest Neighbours Model",'\n')
print(metrics.classification_report(y_test,y_pred))
Classification report for K-Nearest Neighbours Model
precision recall f1-score support
1 0.37 0.59 0.45 18636
2 0.15 0.11 0.13 13946
3 0.17 0.02 0.04 9037
accuracy 0.30 41619
macro avg 0.23 0.24 0.21 41619
weighted avg 0.25 0.30 0.25 41619
model_ada.fit(X_train,y_train)
AdaBoostClassifier()
y_pred = model_ada.predict(X_test)
print("Classification report for AdaBoost Model",'\n')
print(metrics.classification_report(y_test,y_pred))
Classification report for AdaBoost Model
precision recall f1-score support
1 0.85 1.00 0.92 18636
2 1.00 0.76 0.86 13946
3 1.00 1.00 1.00 9037
accuracy 0.92 41619
macro avg 0.95 0.92 0.93 41619
weighted avg 0.93 0.92 0.92 41619
We could also use GridSearchCV or RandomizedSearchCV to search for the best hyperparameters for these models. Since some of the models already achieve an acceptable score, we stop here.
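For completeness, a minimal GridSearchCV sketch. It runs on synthetic stand-in data, and the grid values are illustrative rather than tuned choices from this project:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the training data so the sketch is self-contained.
X_demo, y_demo = make_classification(n_samples=200, n_features=10, random_state=2)

param_grid = {'n_estimators': [50, 100],      # illustrative grid values
              'learning_rate': [0.5, 1.0]}
search = GridSearchCV(AdaBoostClassifier(random_state=2), param_grid,
                      cv=3, scoring='f1_weighted')
search.fit(X_demo, y_demo)
print(search.best_params_, round(search.best_score_, 3))
```

GridSearchCV exhaustively evaluates every combination in the grid with cross-validation and refits the best one, so search can then be used like a fitted classifier.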
d = {'color': ['C0', "#e74c3c"]}
g = sns.FacetGrid(data = merged_df[merged_df['gender'] != 'O'], row='event', col='gender', hue_kws=d, hue='gender', height=5)
g.map(plt.hist, 'offer_type')
<seaborn.axisgrid.FacetGrid at 0x7fefeabc6eb0>
Part I
Male customers represent around 57% of the customer base and use the Starbucks app more than female customers.
The 46-60 age group uses the app the most among both males and females.
Discount offers are the most preferred by customers.
Fewer customers actually complete an offer than merely view or ignore it.
The figures in the EDA section can be examined further to understand which offers to send to which customers.
Discount and BOGO offers increase the customer purchasing rate.
Part II
Note: a classifier's .score method returns accuracy, not F1, so the numbers below are accuracy scores.
print("Decision Tree gave us an accuracy of {}".format(model_dt.score(X_test,y_test)),'\n')
print("Random Forest gave us an accuracy of {}".format(model_rf.score(X_test,y_test)),'\n')
print("K-Nearest Neighbors gave us an accuracy of 0.30",'\n')
print("AdaBoost gave us an accuracy of {}".format(model_ada.score(X_test,y_test)),'\n')
Decision Tree gave us an accuracy of 0.8551863331651409
Random Forest gave us an accuracy of 0.6915110886854562
K-Nearest Neighbors gave us an accuracy of 0.30
AdaBoost gave us an accuracy of 0.9181623777601576